VideoHelp Forum
  1. Hi everybody,

    Let's assume I want to encode (x265) a video with a source frame size of 1920x800. After some testing I find the optimal bitrate to be B1 Kbps.
    Now I want to encode the same video, but this time it happens to have a source frame size of 1920x1080. That's exactly 35% more pixels.
    One could assume that the equivalent bitrate for a similar visual quality on any frame subset would be B2 = 1.35 * B1.
    And if I wanted to encode the same video with a source frame size of 3840x2160, one could assume that the equivalent bitrate would be B3 = 4 * B1.

    However, I suspect that it may not be that simple.
    Maybe x265 is built in a way that optimizes those ratios: B1 < B2 < 1.35 * B1, and B1 < B3 < 4 * B1.
    Or maybe not. I don't know.

    Any thoughts or reading on this? Thanks in advance.
  2. It's a really good question, one that I've asked myself many times.

    I don't know the answer, but this site might help point you in the right direction:

    What is the Optimal Bitrate for Your Resolution?

    Most of it is beginner-level stuff, but there is a quality/bitrate/resolution chart which "answers" your question, sort of. I've copied it below. I make no claim that what it shows is correct.

  3. Originally Posted by Jose Hidalgo
    Let's assume I want to encode (x265) a video with a source frame size of 1920x800. After some testing I find the optimal bitrate to be B1 Kbps.
    Now I want to encode the same video, but this time it happens to have a source frame size of 1920x1080. That's exactly 35% more pixels.
    Where did those extra pixels come from? Is the original video being letterboxed into a BD-compliant frame size, stretched to fill the screen, or just unmatted?
  4. @johnmeyer: thanks for the info! Unfortunately, that's not exactly what I'm asking. I agree about the general shape of those graphs, of course. But my question is not related to any subjective visual perception; it's only related to the way the x265 algorithm works. I'm asking this: if we increase the number of pixels to encode by X%, shall we increase the bitrate by exactly X% to maintain the same OBJECTIVE quality? (Hence the fact that I said "on any frame subset".) I don't know if I can express it more clearly, so if any expert out there wants to rephrase it better... (PS: the linked site also makes no sense, since they talk about bitrates but never say whether they relate to x264, x265, or another codec!)

    @koberulz: here I'm talking about the SAME video, imagining that it exists in several frame sizes. You can imagine that it was filmed with a single 4K camera and then edited into several versions: 3840x2160, 1920x1080, 1920x800... Same length, same scenes, no stretching, no processing. Only resizing and cropping, which gives three "original" versions. And THEN we want to encode each of these three versions with x265.

    If I were talking about two different videos, you could always say that the videos are different, so the optimal bitrates should be different, etc. (e.g. an action movie with lots of SFX will probably need a higher bitrate than an anime). So yes, I'm talking about the same video in three "original" versions.
  5. If you don't letterbox, stretch, or reframe it, it'll be the same size. There's no other way to make it a different size.
  6. Can you please read again what I said? I said "resizing and cutting". Resizing and stretching are different things; I of course mean that all proportions are preserved. Maybe that's what you call reframing, although I'm not sure. Never mind, because that's definitely not the question. I'm not sure you've really understood the question itself.
    Please feel free to contribute to this topic, but from now on ONLY in a constructive way, and ONLY in order to answer the question that was asked. Thank you.
  7. No, it always has to be a different bitrate. Looking for a rule for how much makes little sense. The only way to get closer to a reliable correlation would be to encode 200 of your movies, mixed across all styles, run tests, and then look for a correlation that approximates some number. But an average number is nonsense, as you know: the average temperature was supposed to be sixty degrees today, but we froze our butts off because it was way off. It also depends on the resize method, shapes can change, and it is content-dependent.

    1920x1080 vs. 1920x800 is a weird example: you might need the same bitrate if you just cut off the letterbox, or you'd have to change the aspect ratio.
  8. The question arose in my mind after seeing, in various places, several x265 encodes of the same movies by the same encoder (so supposedly at the same level of quality), with surprising bitrates.
    E.g.: encode 1 = 1080p = 3000 Kbps, encode 2 = 2160p = 6000 Kbps. Which means the bitrate was multiplied by only 2, even though the number of pixels was multiplied by 4...

    As for the 1920x1080 vs. 1920x800 example, we shall assume that there is no "video aspect ratio" involved, so no interpolation magic. We shall simply assume that we have two distinct videos to begin with: one is 1920x800 and the other is 1920x1080, so the second one has 35% more pixels. Now we need to encode both videos separately with x265. Should the second encode have 35% more bitrate to maintain the same level of OBJECTIVE (not subjective) quality? Or does x265 have some hidden optimizations that make it possible to reduce that bitrate?

    I realize that this is a highly technical question, and probably only x265 experts could answer it. I hope we can find some of them on this forum.
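    For what it's worth, the implied scaling exponent in that real-world example can be backed out with two lines of Python (a quick sketch using the bitrates quoted above):

    Code:
    import math

    # Bitrate doubled (3000 -> 6000 kbps) while the pixel count quadrupled
    # (1080p -> 2160p), so bitrate ~ pixels^x with x = log(2) / log(4).
    exponent = math.log(6000 / 3000) / math.log(4)
    print(exponent)  # 0.5

    So those encodes behaved as if the bitrate scaled with the square root of the pixel count.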
  9. Doing these correlations, you start imagining extremes. Imagine a screen with one solid color: what difference in bitrate is there going to be? Not much, if any. Then take the other extreme: a movie with incredible detail everywhere and no pattern, so it is difficult to compress at all. Those two cases will be extremely far apart. And every piece of video content is different: different noise, throw a cartoon in there, etc. Doubling the pixels does not mean doubling the bitrate at all. At a guess, for a 'normal' movie, the bitrate would be bigger, but not multiplied by the full pixel ratio. That would be the theoretical case for one of those extremes, and it never happens. And encoder peaks could never be fully kept for that detailed extreme either (imagine different pixels next to each other all over the place); encoders encode with the peaks cut off. That is another issue, as you can see.

    With 1920x1080 vs. 1920x800 you are even judging different content, so that adds further error on top.
  10. I don't understand what your examples have to do with my question. If a movie has "incredible details all over without some pattern", then of course it will be difficult to compress, whether at 1920x800, 1920x1080, or 4K. The difficulty will be THE SAME in all three cases, since it's the same movie we're trying to compress at three different frame sizes. So my question (which may be completely stupid, but I just want to be sure) is: is x265 designed in a way that optimizes the bitrate as the number of pixels increases, or should we just assume that when the number of pixels doubles, the bitrate must double too in order to preserve the same OBJECTIVE quality in all frame subsets? (And once again, objective is the important word here: we're talking objective quality, not subjective overall visual impression.)
  11. The difficulty will be THE SAME
    No, it will not. You starve the encoder and you get different results than you would with a pile of bitrate. You cannot get the same ratio in both cases. Again, remember that one-color image if you're having trouble seeing it.


    If you mean whether the encoder 'cheats' more at higher resolutions and smooths out more when it does not have enough bitrate: possibly, why not. That plays on human perception and it might exploit it, or not; I'm not sure it is common knowledge whether that is in its algorithms. If it is, there might be something in the settings to compensate for it or to set it. You can see some tests right now by poisondeathray, and you can see encoder comparisons where some encoders smooth out like crazy for whatever reason (fast GPU encoders, etc.), so logically: more resolution, more smoothing, and the ratio goes down.
  12. Sorry, I have the feeling that we're talking about the same thing but in different ways. My English may not be good enough to express myself perfectly. Sorry again.

    I say there is no resizing involved once we have the three different sources (which I just assume we magically have). That means three encoding scenarios:
    - Case 1 : 1920x800 source, encoded to 1920x800 with x265.
    - Case 2 : 1920x1080 source, encoded to 1920x1080 with x265.
    - Case 3 : 3840x2160 source, encoded to 3840x2160 with x265.
    I hope I've been clear about that.

    When I say "the difficulty will be THE SAME", I mean that in all three cases the encoder will have to deal OVERALL with the same "incredible details all over without some pattern" and try to compress them. I say the difficulty is the same only in that sense: because it's the same movie, as opposed to a completely different movie. Of course, in case 3 there will be even more detail, but I think that's beside the point: what I mean is that the OVERALL difficulty for the whole movie will be the same, as opposed to another movie. In other words, a movie with "incredible details all over without some pattern" would have an overall difficulty of 8/10 or 9/10, as opposed to some anime with an overall difficulty of 2/10 or 3/10, regardless of the resolution.
  13. I tend to be of the school of thought that, in order to achieve a similar objective quality with any encoder, one should maintain the same bits-per-pixel ratio across a resize. As an example, assume you have a 4096x2160p24 AVC encode with a Bits/(Pixel*Frame) ratio of 0.067; if you were to resize it to 2.5K and wanted to stay with AVC, then you should choose a bitrate that keeps that ratio.

    The examples you cited, where a file 4 times bigger only used 2 times the bitrate, were probably done by some guy uploading to a torrent site who thinks he knows what he's doing.
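    As a rough sketch of that constant-ratio approach (just the arithmetic; the function name is mine):

    Code:
    def bitrate_for_ratio(width, height, fps, bits_per_pixel_frame):
        """Bitrate in bits per second that keeps a given Bits/(Pixel*Frame) ratio."""
        return width * height * fps * bits_per_pixel_frame

    # The 4096x2160p24 example above at a ratio of 0.067 -> ~14.2 Mbps
    print(bitrate_for_ratio(4096, 2160, 24, 0.067) / 1e6)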
  14. Originally Posted by Jose Hidalgo
    .........
    Any thoughts or reading on this? Thanks in advance.
    According to Ben Waggoner:

    «In general you need fewer bits per pixel as the number of pixels goes up, as any given artifact is a lot smaller. This is probably extra true for 8K; a completely messed up 4x4 block at 8K takes up as much visual field as 1 bad pixel at 1080p. The classic rule of thumb is that bitrate should go up proportionately to the 3/4ths power of the change in frame size. Thus:

    new bitrate = old bitrate * (new width/old width * new height/old height)^0.75

    But with modern codecs that scale better to high resolutions, the factor is going to be lower, maybe 2/3rds? That works out to about 2x more bits, which gets covered by the 2x efficiency improvement.

    Also, newer codecs have less objectionable distortions, so PSNR underestimates subjective improvements due to error-suppression features like in-loop deblocking. AV1 has a ton of those, which is presumably why we are seeing its greatest strengths at lower bitrates.

    This is all ballpark. But H.264 gave 720p at roughly 480p MPEG-2 bandwidth and HEVC gave 4K at roughly 1080p H.264 bandwidth.»


    source: https://forum.doom9.org/showthread.php?p=1886563#post1886563
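    That rule of thumb is easy to play with in Python; here is a small sketch (the function name is mine, not anything from x265):

    Code:
    def scaled_bitrate(old_bitrate, old_w, old_h, new_w, new_h, exponent=0.75):
        """Waggoner's rule of thumb: scale the bitrate by the pixel-count
        ratio raised to a power (0.75 classic, ~2/3 for newer codecs)."""
        pixel_ratio = (new_w * new_h) / (old_w * old_h)
        return old_bitrate * pixel_ratio ** exponent

    # Example: a 3000 kbps 1920x800 encode rescaled to 3840x2160
    print(round(scaled_bitrate(3000, 1920, 800, 3840, 2160)))       # ~10627 kbps
    print(round(scaled_bitrate(3000, 1920, 800, 3840, 2160, 2/3)))  # ~9234 kbps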
    "Programmers are human-shaped machines that transform alcohol into bugs."
  15. Originally Posted by Jose Hidalgo
    ... I'm asking this: if we increase the number of pixels to encode by X%, shall we increase the bitrate by exactly X% to maintain the same OBJECTIVE quality?
    That's exactly what that graph shows. It only shows it for discrete numbers of pixels, but you could easily interpolate between the results it shows.
  16. Typically, encoders do not scale linearly with the frame dimensions; the required bitrate usually grows by less.


    I think what he means is: (1) is just a cropped version of (2), and (2) is just a resized version of (3).

    (1) has a ~2.35:1 aspect ratio, but (2) and (3) have a 16:9 aspect ratio if they are not letterboxed.

    Assuming that's true, it depends on what type of content was cropped away, i.e. how complex and compressible it is. The more complex the content that was cropped away, the greater your bitrate savings for a given level of quality. Or, in your case, the more complex whatever was added to the 800 lines to make up the 1080, the more bitrate is required.

    It's usually not the "same complexity". A feature is usually shot with the more important elements in the foreground: there are more close-ups, more detail, and depth of field means the peripheral background elements are typically more blurred. Peripheral elements therefore typically consume less of the bitrate required for a given level of quality. But it depends on the specifics; e.g. a nature documentary where forest trees and waving leaves get added back into the 1080 frame will cost you proportionally more bitrate than the baseline.

    But if they are all 2.35:1 and (2) and (3) are letterboxed, then the black bars do not cost much. Maybe +0.1-0.3% bitrate.



    Originally Posted by johnmeyer
    Originally Posted by Jose Hidalgo
    ... I'm asking this: if we increase the number of pixels to encode by X%, shall we increase the bitrate by exactly X% to maintain the same OBJECTIVE quality?
    That's exactly what that graph shows. It only shows it for discrete numbers of pixels, but you could easily interpolate between the results it shows.
    He's asking for objective, not subjective. The graphic says "subjective".

    Objective means a measurement such as PSNR or SSIM.
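    For anyone who wants to actually measure it: ffmpeg's psnr filter computes PSNR between an encode and its source (both must have the same resolution). A minimal sketch, with hypothetical file names:

    Code:
    import subprocess

    # Average PSNR is printed in ffmpeg's log output at the end of the run;
    # the ssim filter works the same way.
    subprocess.run(["ffmpeg", "-i", "encode.mkv", "-i", "source.mkv",
                    "-lavfi", "psnr", "-f", "null", "-"], check=True)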
  17. Out of curiosity, has anyone tried what happens to the file size if you take a 4K source and:
    1. encode it lossless at 4K,
    2. resize it to 1080p and encode it lossless,
    3. resize it to 480p and encode it lossless?
    Since the quality of each clip stays the same when we encode lossless, shouldn't the relation between size and resolution give a lower bound for the effectiveness? (See the sketch below.)
    Sure, this has flaws, since it assumes that not encoding lossless:
    a. is more effective (probably right),
    b. preserves the quality as well as it can, independent of the option chosen (definitely wrong),
    c. starts from sources with the same 'objective quality' to begin with (definitely wrong),
    but it should still give a lower bound, shouldn't it?
    (This could be reproduced with constant-quantizer encodes, adding feature X in different runs.)

    Cu Selur
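    A minimal sketch of that experiment, assuming an ffmpeg build with libx265 (the flags are real, the file names hypothetical):

    Code:
    import os
    import subprocess

    SOURCE = "source_4k.mkv"  # hypothetical 4K source
    variants = {
        "2160p": None,              # keep the original size
        "1080p": "scale=1920:-2",   # downscale to width 1920
        "480p":  "scale=854:-2",    # downscale to width 854
    }

    for name, vf in variants.items():
        out = f"lossless_{name}.mkv"
        cmd = ["ffmpeg", "-y", "-i", SOURCE]
        if vf:
            cmd += ["-vf", vf]
        cmd += ["-c:v", "libx265", "-x265-params", "lossless=1", "-an", out]
        subprocess.run(cmd, check=True)
        print(name, os.path.getsize(out), "bytes")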
  18. @El Heggunte: THANK YOU SO MUCH!!! Yes, that's the kind of stuff I was talking about. It's really helpful, as it gives me some meaningful insights. It's just what I suspected (although not for the exact reasons I suspected), and I will delve into it. Again, many thanks for pointing me in the right direction.

    @poisondeathray: yes, that's what I meant. (1) is a cropped version of (2), which is a resized version of (3). Thanks for expressing it clearly.
    Also, yes, I was talking about objective quality, hence the fact that the graph is useless for my research (although it's meaningful and interesting for other purposes).

    @All: if you have any more info to complement El Heggunte's answer, all the better. But his answer makes me happy already.
  19. But do you know whether x265 actually optimizes and cheats with that theoretical 0.75 coefficient in mind, across resolutions? Or is it just a recommendation, one that is thrown off as soon as we change viewing devices?

    For example, remember how good our DivX rips looked on a CRT.
  20. Well, that's what I was suspecting, but I guess I was wrong about the cause.
    El Heggunte got the right cause: "you need fewer bits per pixel as the number of pixels goes up, as any given artifact is a lot smaller".
    This isn't related to x265 itself. The part that is related to x265 is: "with modern codecs that scale better to high resolutions, the factor is going to be lower, maybe 2/3rds".

    So I did some calculations for my example (with the "2/3" exponent, given that we're talking about x265 here):
    - Case 2: 35% more pixels than case 1, but only about 22% more bitrate for an equivalent objective (not subjective) quality.
    - Case 3: 4x the pixels of case 2, but only about 2.52x the bitrate for an equivalent objective quality.
    This explains things for me.
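    The arithmetic behind those two figures, for anyone checking:

    Code:
    print(1.35 ** (2 / 3))  # ~1.22 -> ~22% more bitrate for 35% more pixels
    print(4.0 ** (2 / 3))   # ~2.52 -> ~2.52x the bitrate for 4x the pixels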
  21. Did you verify this with measurements?

    Please post them if you measure them; I'd be interested in seeing them.
  22. So far I'm just playing around with the formula given by El Heggunte / Ben Waggoner, that's all.
    I guess Ben Waggoner did some serious work to get that formula right. He's the professional, so I guess I'll trust him, lol.
    If you're interested in his research, maybe you should post directly on Doom9. That's what I'll do if I seek further help on this matter.
  23. I think that much is pretty clear: more bitrate, but not linearly with the pixel count. But be careful with any formula: it is an average, an approximation, like that average-temperature weather example. Take one scene with the detail of thousands of twittering tree leaves across the screen in 4K, and your formula gives you a big mess on screen. You have to consider peaks and cut them off while encoding to a certain point, and that cut-off is relatively more drastic compared to full HD in such a case, etc.
  24. I absolutely agree
  25. He says those are "ballpark" guidelines. I interpret that as a ± percentage error range. I would like to see some actual data to check how well it holds up. Even if PSNR is not that useful, it's still a well-known measurement, and people know how it behaves.


